Tactile Information
Gentle Object Retraction in Dense Clutter Using Multimodal Force Sensing and Imitation Learning
Brouwer, Dane, Citron, Joshua, Nolte, Heather, Bohg, Jeannette, Cutkosky, Mark
Dense collections of movable objects are common in everyday spaces, from cabinets in a home to shelves in a warehouse. Safely retracting objects from such collections is difficult for robots, yet people do it frequently, leveraging learned experience in tandem with vision and non-prehensile tactile sensing on the sides and backs of their hands and arms. We investigate the role of contact force sensing for training robots to gently reach into constrained clutter and extract objects. The available sensing modalities are (1) "eye-in-hand" vision, (2) proprioception, (3) non-prehensile triaxial tactile sensing, (4) contact wrenches estimated from joint torques, and (5) a measure of object acquisition obtained by monitoring the vacuum line of a suction cup. We use imitation learning to train policies from a set of demonstrations on randomly generated scenes, then conduct an ablation study of wrench and tactile information. We evaluate each policy's performance across 40 unseen environment configurations. Policies employing any force sensing show fewer excessive-force failures, a higher overall success rate, and faster completion times. The best performance is achieved using both tactile and wrench information, producing an 80% improvement over the baseline without force information.
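The five modalities above lend themselves to a simple concatenated policy observation plus a force-limit check. A minimal sketch, assuming illustrative feature dimensions and a hypothetical 15 N force limit (none of these values are from the paper):

```python
import numpy as np

# Hypothetical observation fusion for a retraction policy. Names, feature
# dimensions, and the force threshold are illustrative assumptions.
def fuse_observations(image_feat, joint_pos, tactile_triaxial, wrench, vacuum_engaged):
    """Concatenate multimodal inputs into a single policy observation vector."""
    return np.concatenate([
        image_feat,                # (1) "eye-in-hand" vision features
        joint_pos,                 # (2) proprioception
        tactile_triaxial.ravel(),  # (3) per-taxel (fx, fy, fz) readings
        wrench,                    # (4) contact wrench from joint torques
        [float(vacuum_engaged)],   # (5) suction-line acquisition signal
    ])

def excessive_force(wrench, force_limit=15.0):
    """Flag an excessive-force failure when the contact force norm exceeds a limit."""
    return float(np.linalg.norm(wrench[:3])) > force_limit

obs = fuse_observations(np.zeros(8), np.zeros(7), np.zeros((4, 3)), np.zeros(6), True)
print(obs.shape)  # (34,)
```

Such a flat vector would be the input to the imitation-learned policy; ablating a modality amounts to dropping its slice from the concatenation.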
DexSkin: High-Coverage Conformable Robotic Skin for Learning Contact-Rich Manipulation
Wistreich, Suzannah, Shi, Baiyu, Tian, Stephen, Clarke, Samuel, Nath, Michael, Xu, Chengyi, Bao, Zhenan, Wu, Jiajun
Human skin provides a rich tactile sensing stream, localizing intentional and unintentional contact events over a large and contoured region. Replicating these tactile sensing capabilities for dexterous robotic manipulation systems remains a longstanding challenge. In this work, we take a step towards this goal by introducing DexSkin. DexSkin is a soft, conformable capacitive electronic skin that enables sensitive, localized, and calibratable tactile sensing, and can be tailored to varying geometries. We demonstrate its efficacy for learning downstream robotic manipulation by sensorizing a pair of parallel jaw gripper fingers, providing tactile coverage across almost the entire finger surfaces. We empirically evaluate DexSkin's capabilities in learning challenging manipulation tasks that require sensing coverage across the entire surface of the fingers, such as reorienting objects in hand and wrapping elastic bands around boxes, in a learning-from-demonstration framework. We then show that, critically for data-driven approaches, DexSkin can be calibrated to enable model transfer across sensor instances, and demonstrate its applicability to online reinforcement learning on real robots. Our results highlight DexSkin's suitability and practicality for learning real-world, contact-rich manipulation. Please see our project webpage for videos and visualizations: https://dex-skin.github.io/.
Classification of Vision-Based Tactile Sensors: A Review
Li, Haoran, Lin, Yijiong, Lu, Chenghua, Yang, Max, Psomopoulou, Efi, Lepora, Nathan F
Vision-based tactile sensors (VBTS) have gained widespread application in robotic hands, grippers and prosthetics due to their high spatial resolution, low manufacturing costs, and ease of customization. While VBTSs share common design features, such as a camera module, they differ in a rich diversity of sensing principles, material compositions, multimodal approaches, and data interpretation methods. Here, we propose a novel classification of VBTS that categorizes the technology into two primary sensing principles based on the underlying transduction of contact into a tactile image: the Marker-Based Transduction Principle and the Intensity-Based Transduction Principle. Marker-Based Transduction interprets tactile information by detecting marker displacement and changes in marker density. Depending on the design of the contact module, it can be further divided into two subtypes: Simple Marker-Based (SMB) and Morphological Marker-Based (MMB) mechanisms. Similarly, the Intensity-Based Transduction Principle encompasses the Reflective Layer-Based (RLB) and Transparent Layer-Based (TLB) mechanisms. This paper provides a comparative study of the hardware characteristics of these four types of sensors, including various combination types, and discusses the commonly used methods for interpreting tactile information. This comparison reveals current challenges faced by VBTS technology and directions for future research. In robotic systems, tactile sensing is fundamental for enabling robots to interact with their environment through physical contact. By delivering real-time tactile feedback, such as object stiffness, local force, slip and contact position, this capability empowers robotic systems to achieve precise object manipulation while preventing damage [1]-[4]. Traditional electronic technologies such as piezoelectric and piezoresistive sensor arrays have been considered promising due to their high temporal resolution and thin profiles.
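The Simple Marker-Based (SMB) idea can be illustrated with a toy marker-tracking computation: estimate the local shear field from marker centroid displacement between a reference frame and a contact frame. The nearest-neighbour matching and the coordinates below are assumptions for illustration, not any specific sensor's pipeline:

```python
import numpy as np

# Toy SMB-style marker tracking: for each reference marker centroid, find the
# nearest centroid in the current frame and take the displacement vector as a
# proxy for local shear at that marker. Assumes small displacements so that
# nearest-neighbour matching is unambiguous.
def marker_displacements(ref_markers, cur_markers):
    disps = []
    for p in ref_markers:
        d = np.linalg.norm(cur_markers - p, axis=1)
        disps.append(cur_markers[np.argmin(d)] - p)
    return np.array(disps)

ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
cur = ref + np.array([0.5, 0.2])   # uniform shear of the gel surface
print(marker_displacements(ref, cur))
```

MMB sensors would additionally exploit changes in marker shape or density; intensity-based (RLB/TLB) sensors skip markers entirely and read contact from image intensity.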
Tightly-Coupled LiDAR-IMU-Leg Odometry with Online Learned Leg Kinematics Incorporating Foot Tactile Information
Okawara, Taku, Koide, Kenji, Takanose, Aoki, Oishi, Shuji, Yokozuka, Masashi, Uno, Kentaro, Yoshida, Kazuya
In this letter, we present a tightly coupled LiDAR-IMU-leg odometry that is robust to challenging conditions such as featureless environments and deformable terrains. We developed an online learning-based leg kinematics model, named the neural leg kinematics model, which incorporates tactile information (foot reaction force) to implicitly express the nonlinear dynamics between robot feet and the ground. Online training of this model enhances its adaptability to weight load changes of the robot (e.g., in delivery or transportation tasks) and to terrain conditions. Using the neural adaptive leg odometry factor and online uncertainty estimation of the leg kinematics model's motion predictions, we jointly solve online training of this kinematics model and odometry estimation on a unified factor graph to retain the consistency of both. The proposed method was verified through real experiments using a quadruped robot in two challenging situations: 1) a sandy beach, representing an extremely featureless area with deformable terrain, and 2) a campus, including multiple featureless areas and terrain types of asphalt, gravel (deformable terrain), and grass. Experimental results showed that our odometry estimation incorporating the neural leg kinematics model outperforms state-of-the-art works. Our project page is available for further details: https://takuokawara.github.io/RAL2025_project_page/
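The online-training idea behind a learned leg kinematics model can be sketched with a toy stand-in: a linear model mapping leg/foot features to body velocity, updated by SGD as new residuals arrive. The real method uses a neural model trained jointly with odometry on a factor graph; everything below (model form, learning rate, dimensions) is an illustrative assumption:

```python
import numpy as np

# Toy online-learned "leg kinematics": a linear map from joint velocities and
# foot reaction force features to a body-velocity component, updated by one
# SGD step per measurement. Illustrative stand-in for the neural model.
class OnlineLegKinematics:
    def __init__(self, n_in, lr=0.05):
        self.w = np.zeros(n_in)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, v_true):
        # One SGD step on the squared prediction error
        err = self.predict(x) - v_true
        self.w -= self.lr * err * x
        return err

rng = np.random.default_rng(0)
model = OnlineLegKinematics(n_in=4)
w_true = np.array([0.8, -0.3, 0.1, 0.5])   # hypothetical ground-truth map
for _ in range(2000):
    x = rng.normal(size=4)
    model.update(x, float(w_true @ x))
print(np.round(model.w, 2))   # converges toward w_true
```

In the paper's formulation the analogous update happens inside the factor graph, so model weights and robot poses stay mutually consistent; this sketch shows only the online-adaptation aspect.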
TLA: Tactile-Language-Action Model for Contact-Rich Manipulation
Hao, Peng, Zhang, Chaofan, Li, Dingzhe, Cao, Xiaoge, Hao, Xiaoshuai, Cui, Shaowei, Wang, Shuo
Significant progress has been made in vision-language models. However, language-conditioned robotic manipulation for contact-rich tasks remains underexplored, particularly in terms of tactile sensing. To address this gap, we introduce the Tactile-Language-Action (TLA) model, which effectively processes sequential tactile feedback via cross-modal language grounding to enable robust policy generation in contact-intensive scenarios. In addition, we construct a comprehensive dataset that contains 24k pairs of tactile action instruction data, customized for fingertip peg-in-hole assembly, providing essential resources for TLA training and evaluation. Our results show that TLA significantly outperforms traditional imitation learning methods (e.g., diffusion policy) in terms of effective action generation and action accuracy, while demonstrating strong generalization capabilities by achieving over 85% success rate on previously unseen assembly clearances and peg shapes. We publicly release all data and code in the hope of advancing research in language-conditioned tactile manipulation skill learning. Project website: https://sites.google.com/view/tactile-language-action/
Focused Blind Switching Manipulation Based on Constrained and Regional Touch States of Multi-Fingered Hand Using Deep Learning
Funabashi, Satoshi, Hiramoto, Atsumu, Chiba, Naoya, Schmitz, Alexander, Kulkarni, Shardul, Ogata, Tetsuya
To achieve a desired grasping posture (including object position and orientation), multi-finger motions need to be conducted according to the current touch state. Specifically, when subtle changes occur while correcting the object state, not only proprioception but also tactile information from the entire hand can be beneficial. However, switching motions with the high DOFs of multiple fingers and abundant tactile information is still challenging. In this study, we propose a loss function with constraints on touch states and an attention mechanism for focusing on important modalities depending on the touch state. The policy model is an AE-LSTM, which consists of an Autoencoder (AE) that compresses abundant tactile information and a Long Short-Term Memory (LSTM) network that switches the motion depending on the touch state. Cap-opening was chosen as the target task, consisting of the subtasks of sliding an object and opening its cap. As a result, the proposed method achieved the best success rates with a variety of objects for real-time cap-opening manipulation. Furthermore, we confirmed that the proposed model acquired the features of each subtask and attention to specific modalities.
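The modality-attention idea can be sketched with a softmax weighting over modality features: relevance scores (learned in the paper, hand-set here for illustration) determine how much the fused input leans on tactile versus proprioceptive features at a given touch state. All names and values below are illustrative assumptions:

```python
import numpy as np

# Illustrative modality attention: weight equal-length proprioceptive and
# tactile feature vectors by a softmax over relevance scores, so the policy
# can emphasize tactile input during contact-rich subtasks.
def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attend(modality_feats, scores):
    """Return the attention-weighted sum of the modality features and the weights."""
    w = softmax(scores)
    return sum(wi * f for wi, f in zip(w, modality_feats)), w

proprio = np.array([1.0, 0.0])
tactile = np.array([0.0, 1.0])
# Higher score on the tactile channel, as if the hand is in contact
fused, w = attend([proprio, tactile], scores=np.array([0.0, 2.0]))
print(np.round(w, 3))   # tactile channel dominates
```

In the actual model the scores would be produced by the network as a function of the compressed tactile state, so attention shifts automatically when the subtask (slide vs. open) changes.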
The Role of Tactile Sensing for Learning Reach and Grasp
Zhang, Boya, Andrussow, Iris, Zell, Andreas, Martius, Georg
Stable and robust robotic grasping is essential for current and future robot applications. In recent works, the use of large datasets and supervised learning has enhanced speed and precision in antipodal grasping. However, these methods struggle with perception and calibration errors due to large planning horizons. To obtain more robust and reactive grasping motions, leveraging reinforcement learning combined with tactile sensing is a promising direction. Yet, there is no systematic evaluation of how the complexity of force-based tactile sensing affects the learning behavior for grasping tasks. This paper compares various tactile and environmental setups using two model-free reinforcement learning approaches for antipodal grasping. Our findings suggest that under imperfect visual perception, various tactile features improve learning outcomes, while complex tactile inputs complicate training.
Sensorimotor Control Strategies for Tactile Robotics
Donato, Enrico, Preti, Matteo Lo, Beccai, Lucia, Falotico, Egidio
Physical contacts are at the base of every embodied interaction. Like living beings, robots continuously establish diverse contacts to fulfill their tasks. Over the last decades, one of the bold goals of robotics research has been to provide artificial agents with the dexterity and adaptability typical of biological systems while interacting with their surroundings. Despite extensive work and excellent results in this field, such capabilities still require substantial refinement and study before they are fully delivered on our robots. The scientific contribution to this objective builds upon three pillars: the design of an appropriate embodiment, concerning its morphology, actuation strategy, and sensing technology; feature extraction algorithms that build a perceptual model of the experience from tactile signals; and closed-loop robot control strategies that drive robot decisions according to either raw tactile feedback or perceptual representations.
Curriculum Is More Influential Than Haptic Information During Reinforcement Learning of Object Manipulation Against Gravity
Ojaghi, Pegah, Mir, Romina, Marjaninejad, Ali, Erwin, Andrew, Wehner, Michael, Valero-Cuevas, Francisco J
Learning to lift and rotate objects with the fingertips is necessary for autonomous in-hand dexterous manipulation. In our study, we explore the impact of various factors on successful learning strategies for this task. Specifically, we investigate the role of curriculum learning and haptic feedback in enabling the learning of dexterous manipulation. Using model-free reinforcement learning, we compare different curricula and two haptic information modalities (no tactile vs. 3D force sensing) for lifting and rotating a ball against gravity with a three-fingered simulated robotic hand and no visual input. Our best results were obtained with a novel curriculum-based learning rate scheduler, which adjusts the linearly decaying learning rate whenever the reward function changes, accelerating convergence to higher rewards. Our findings demonstrate that the choice of curriculum greatly biases the acquisition of different features of dexterous manipulation. Surprisingly, successful learning can be achieved even in the absence of tactile feedback, challenging conventional assumptions about the necessity of haptic information for dexterous manipulation tasks. We demonstrate the generalizability of our results to balls of different weights and sizes, underscoring the robustness of our learning approach. This work therefore emphasizes the importance of the choice of curriculum and challenges long-held notions about the need for tactile information to autonomously learn in-hand dexterous manipulation.
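A curriculum-aware linearly decaying learning-rate scheduler might look like the sketch below: the rate decays linearly within each curriculum stage and is re-raised when the reward function changes. The class name, parameter values, and reset rule are illustrative assumptions, not the paper's exact recipe:

```python
# Hedged sketch of a curriculum-based LR scheduler: linear decay within a
# stage, restarted when the curriculum (reward function) switches stages.
class CurriculumLRScheduler:
    def __init__(self, lr_init=3e-4, lr_final=1e-5, steps_per_stage=1000):
        self.lr_init, self.lr_final = lr_init, lr_final
        self.steps_per_stage = steps_per_stage
        self.step_in_stage = 0

    def on_stage_change(self):
        # Reward changed: restart the linear decay for the new stage
        self.step_in_stage = 0

    def lr(self):
        frac = min(self.step_in_stage / self.steps_per_stage, 1.0)
        self.step_in_stage += 1
        return self.lr_init + frac * (self.lr_final - self.lr_init)

sched = CurriculumLRScheduler()
first = sched.lr()            # lr_init at stage start
for _ in range(1500):
    sched.lr()
decayed = sched.lr()          # clamped at lr_final
sched.on_stage_change()       # new curriculum stage: decay restarts
restarted = sched.lr()
print(first, decayed, restarted)
```

The intuition is that each reward switch effectively poses a new optimization problem, so briefly raising the learning rate lets the policy adapt quickly before fine-tuning resumes.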
Sim2Real Manipulation on Unknown Objects with Tactile-based Reinforcement Learning
Su, Entong, Jia, Chengzhe, Qin, Yuzhe, Zhou, Wenxuan, Macaluso, Annabella, Huang, Binghao, Wang, Xiaolong
Using tactile sensors for manipulation remains one of the most challenging problems in robotics. At the heart of these challenges is generalization: How can we train a tactile-based policy that can manipulate unseen and diverse objects? In this paper, we propose to perform Reinforcement Learning with only visual tactile sensing inputs on diverse objects in a physical simulator. By training with diverse objects in simulation, it enables the policy to generalize to unseen objects. However, leveraging simulation introduces the Sim2Real transfer problem. To mitigate this problem, we study different tactile representations and evaluate how each affects real-robot manipulation results after transfer. We conduct our experiments on diverse real-world objects and show significant improvements over baselines for the pivoting task. Our project page is available at https://tactilerl.github.io/.
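One tactile representation often considered for Sim2Real transfer is a thresholded binary contact mask: binarizing discards simulator-specific intensity detail that may not match the real sensor, which can narrow the sim-to-real gap. The threshold value and arrays below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Sketch of a Sim2Real-friendly tactile representation: reduce a tactile
# intensity image to a 0/1 contact map. The threshold is an assumption.
def binary_contact_mask(tactile_img, threshold=0.1):
    return (tactile_img > threshold).astype(np.float32)

sim_img = np.array([[0.0, 0.05], [0.3, 0.9]])
real_img = np.array([[0.02, 0.0], [0.5, 0.7]])   # different intensities, same contact
print(binary_contact_mask(sim_img))
# The two masks agree even though raw intensities differ:
print(np.array_equal(binary_contact_mask(sim_img), binary_contact_mask(real_img)))
```

Studying several such representations (raw image, mask, marker flow, etc.) and measuring real-robot performance after transfer is the kind of comparison the abstract describes.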